The SRLT builds on established reversal paradigms (Costa et al., 2015; Dombrovski et al., 2010). In this task, participants choose between two stimuli in three 80-trial blocks, for a total of 240 trials. After participants make their choice, feedback appears on the screen indicating whether the choice was correct or incorrect (Figure 1). Each block comprises an acquisition and a reversal phase for one of three separate stimulus pairings. In the initial acquisition phase, one stimulus is associated with “correct” feedback 80% of the time and “incorrect” feedback 20% of the time, while the other is associated with “incorrect” feedback 80% of the time and “correct” feedback 20% of the time. In the reversal phase, these contingencies are reversed, so the previously predominantly rewarded stimulus becomes the less rewarding option. Participants learn the contingencies for the same two stimuli across one acquisition and one reversal phase before two new stimuli are presented. Data for this task were collected using the stimulus presentation software Inquisit.
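As a hypothetical sketch (not the Inquisit implementation), the contingency structure above can be simulated for a single 80-trial block in which stimulus A yields “correct” feedback 80% of the time during acquisition and the contingency flips at the reversal trial:

```r
set.seed(1)
# Simulate one block: trials before `reversal_trial` are acquisition,
# the rest are reversal; p is the reward probability of stimulus A.
simulate_block <- function(n_trials = 80, reversal_trial = 41, p = 0.8) {
  trial <- seq_len(n_trials)
  phase <- ifelse(trial < reversal_trial, "Acquisition", "Reversal")
  p_a_correct <- ifelse(phase == "Acquisition", p, 1 - p)  # P("correct" | chose A)
  data.frame(trial, phase,
             feedback_if_a = rbinom(n_trials, 1, p_a_correct))
}
block <- simulate_block()
table(block$phase)  # 40 acquisition trials, 40 reversal trials
```

Block length, reversal point, and probabilities here are illustrative defaults only.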
The current cleaning code is housed in the Tidy_functions_PUBS script; refer to these functions for any questions about data cleaning. For questions about the definition of task variables, see: https://docs.google.com/spreadsheets/d/1V2UD28C_zAfH90BnNO3si6c0-qDUDFbHoU44rfdEt4I/edit#gid=1309555968.
Load Pilot Data…
knitr::opts_chunk$set(message = FALSE)
options(knitr.duplicate.label = "allow")
##Load Packages and source scripts
pacman::p_load(tidyverse, readr, janitor, ggplot2, wesanderson, cowplot, flextable, plotly, emmeans, data.table)
setwd("~/github_repos/PUBS_Data_Verification/")
source("helper_scripts/Tidy_functions_PUBS.R")
#reversal_data_8.16 <- data.table::fread("~/github_repos/ReversalTask/Reversal_pilot_mTurk.csv", fill = TRUE)
#load data
#load("Reversal_Task_Cleaned.Rdata")
reversal_data <- data.table::fread("~/github_repos/ReversalTask/data/PUBS_Batch1_R.csv", fill = TRUE) %>%
  group_by(subject, time) %>%
  filter(!subject %in% c(324215, 1))  # exclude subjects 324215 and 1
# repair a duplicated subject ID using its unique time stamp
reversal_data <- reversal_data %>% mutate(subject = ifelse(time == "18:47:13", 456, subject))
subjects <- unique(reversal_data$subject)
summary <- "by_phase" #by_block, by_phase, by_block_phase
Clean and transform data. Perform basic checks…
trim_cols <- TRUE
reversal_data <- data.frame(lapply(reversal_data, function(x) gsub(",", ".", x, fixed = TRUE)))
reversal <- tidy_reversal(reversal_data)
## Warning in if (data$date < "2021-08-31") {: the condition has length > 1 and
## only the first element will be used
## Warning in if (data$date > "2021-08-31") {: the condition has length > 1 and
## only the first element will be used
check_tidy(reversal)
## # A tibble: 45 x 2
## subject count
## <chr> <int>
## 1 100832 1
## 2 104587 1
## 3 117568 1
## 4 127306 1
## 5 132661 1
## 6 153044 1
## 7 163496 1
## 8 186071 1
## 9 224761 1
## 10 225033 1
## # … with 35 more rows
# counts <- check_tidy(reversal)
# bad <- c()
# for (i in 1:nrow(counts)) {
#   if (counts$count[i] > 1) {
#     bad <- c(bad, counts$subject[i])
#   }
# }
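The commented loop above can also be written with dplyr verbs: pull the IDs of any subjects whose data appear more than once. `check_counts` below is a toy stand-in for the `check_tidy()` output shown above, with a hypothetical duplicate added for illustration.

```r
library(dplyr)

# One row per subject with a session count; count > 1 flags duplicated data.
check_counts <- tibble(
  subject = c("100832", "104587", "117568"),
  count   = c(1L, 2L, 1L)   # hypothetical duplicate for illustration
)
bad <- check_counts %>% filter(count > 1) %>% pull(subject)
bad  # "104587"
```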
Generating results summary…
Table 1. Performance statistics were calculated by block (1-5) and/or phase (acquisition or reversal). Problematic rows are highlighted yellow.
In summary, all participants learned the task remarkably quickly. They all cleared the first criterion of making at least 10 “correct choices” per block (reached_criterion). Data were flagged as “Below Threshold” if participants chose the “correct” option less than 50% of the time in any given phase (percent_correct).
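A minimal sketch of the flagging rule described above, assuming a trial-level frame with a 0/1 ResponseCorrect column (toy data; column names follow the summary table):

```r
library(dplyr)

# Toy trial-level data for one subject
trials <- tibble(
  subject = rep("100832", 8),
  task_phase = rep(c("Acquisition", "Reversal"), each = 4),
  ResponseCorrect = c(1, 1, 1, 0, 1, 0, 0, 0)
)

# Per-phase accuracy, flagged "Below" when under 50%
flags <- trials %>%
  group_by(subject, task_phase) %>%
  summarise(percent_correct = mean(ResponseCorrect), .groups = "drop") %>%
  mutate(above_threshold = ifelse(percent_correct < 0.5, "Below", "Above"))
flags
```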
| subject | task_phase | avg_latency | reached_criterion | NumberofTrials | NumberCorrect | percent_correct | above_threshold |
|---|---|---|---|---|---|---|---|
100832 | Acquisiton | 421.93396 | 8.000000 | 106.0000 | 89.00000 | 0.8396226 | Above |
100832 | Reversal | 400.37234 | 11.000000 | 94.0000 | 49.00000 | 0.5212766 | Above |
104587 | Acquisiton | 543.29245 | 8.000000 | 106.0000 | 72.00000 | 0.6792453 | Above |
104587 | Reversal | 493.94681 | 9.000000 | 94.0000 | 64.00000 | 0.6808511 | Above |
117568 | Acquisiton | 423.42857 | 9.000000 | 112.0000 | 66.00000 | 0.5892857 | Above |
117568 | Reversal | 383.29213 | 10.000000 | 89.0000 | 65.00000 | 0.7303371 | Above |
127306 | Acquisiton | 446.83333 | 9.000000 | 90.0000 | 64.00000 | 0.7111111 | Above |
127306 | Reversal | 414.01818 | 10.000000 | 110.0000 | 79.00000 | 0.7181818 | Above |
132661 | Acquisiton | 241.41346 | 10.000000 | 104.0000 | 67.00000 | 0.6442308 | Above |
132661 | Reversal | 173.59375 | 14.000000 | 96.0000 | 51.00000 | 0.5312500 | Above |
153044 | Acquisiton | 466.85714 | 8.000000 | 91.0000 | 69.00000 | 0.7582418 | Above |
153044 | Reversal | 415.41284 | 11.000000 | 109.0000 | 74.00000 | 0.6788991 | Above |
163496 | Acquisiton | 692.08696 | 10.000000 | 92.0000 | 70.00000 | 0.7608696 | Above |
163496 | Reversal | 692.14815 | 15.000000 | 108.0000 | 44.00000 | 0.4074074 | Below |
186071 | Acquisiton | 555.57944 | 8.000000 | 107.0000 | 88.00000 | 0.8224299 | Above |
186071 | Reversal | 462.26882 | 8.000000 | 93.0000 | 51.00000 | 0.5483871 | Above |
224761 | Acquisiton | 592.91489 | 8.000000 | 94.0000 | 60.00000 | 0.6382979 | Above |
224761 | Reversal | 507.44340 | 9.000000 | 106.0000 | 77.00000 | 0.7264151 | Above |
225033 | Acquisiton | 99.65979 | 13.000000 | 97.0000 | 43.00000 | 0.4432990 | Below |
225033 | Reversal | 98.08738 | 12.000000 | 103.0000 | 56.00000 | 0.5436893 | Above |
239619 | Acquisiton | 784.20652 | 8.000000 | 92.0000 | 70.00000 | 0.7608696 | Above |
239619 | Reversal | 780.36111 | 8.000000 | 108.0000 | 71.00000 | 0.6574074 | Above |
250866 | Acquisiton | 364.43011 | 11.000000 | 93.0000 | 48.00000 | 0.5161290 | Above |
250866 | Reversal | 184.53271 | 11.000000 | 107.0000 | 56.00000 | 0.5233645 | Above |
271259 | Acquisiton | 467.04673 | 8.000000 | 107.0000 | 85.00000 | 0.7943925 | Above |
271259 | Reversal | 493.48387 | 8.000000 | 93.0000 | 66.00000 | 0.7096774 | Above |
286390 | Acquisiton | 599.15464 | 8.000000 | 97.0000 | 74.00000 | 0.7628866 | Above |
286390 | Reversal | 525.93204 | 8.000000 | 103.0000 | 65.00000 | 0.6310680 | Above |
336228 | Acquisiton | 416.77381 | 13.000000 | 84.0000 | 52.00000 | 0.6190476 | Above |
336228 | Reversal | 408.75862 | 8.000000 | 116.0000 | 79.00000 | 0.6810345 | Above |
361956 | Acquisiton | 591.71296 | 8.000000 | 108.0000 | 89.00000 | 0.8240741 | Above |
361956 | Reversal | 533.22826 | 8.000000 | 92.0000 | 70.00000 | 0.7608696 | Above |
377598 | Acquisiton | 521.25532 | 8.000000 | 94.0000 | 56.00000 | 0.5957447 | Above |
377598 | Reversal | 426.79245 | 15.000000 | 106.0000 | 47.00000 | 0.4433962 | Below |
390664 | Acquisiton | 384.38542 | 9.000000 | 96.0000 | 79.00000 | 0.8229167 | Above |
390664 | Reversal | 369.68269 | 9.000000 | 104.0000 | 82.00000 | 0.7884615 | Above |
402292 | Acquisiton | 337.59341 | 8.000000 | 91.0000 | 60.00000 | 0.6593407 | Above |
402292 | Reversal | 352.40367 | 9.000000 | 109.0000 | 76.00000 | 0.6972477 | Above |
405983 | Acquisiton | 435.84270 | 9.000000 | 89.0000 | 59.00000 | 0.6629213 | Above |
405983 | Reversal | 419.80180 | 9.000000 | 111.0000 | 87.00000 | 0.7837838 | Above |
412911 | Acquisiton | 363.40351 | 8.000000 | 114.0000 | 75.00000 | 0.6578947 | Above |
412911 | Reversal | 346.39535 | 8.000000 | 86.0000 | 56.00000 | 0.6511628 | Above |
498431 | Acquisiton | 465.15966 | 10.000000 | 119.0000 | 70.00000 | 0.5882353 | Above |
498431 | Reversal | 432.90244 | 9.000000 | 82.0000 | 46.00000 | 0.5609756 | Above |
520216 | Acquisiton | 409.05208 | 8.000000 | 96.0000 | 77.00000 | 0.8020833 | Above |
520216 | Reversal | 326.49038 | 8.000000 | 104.0000 | 72.00000 | 0.6923077 | Above |
545599 | Acquisiton | 344.98990 | 9.000000 | 99.0000 | 57.00000 | 0.5757576 | Above |
545599 | Reversal | 291.19802 | 12.000000 | 101.0000 | 53.00000 | 0.5247525 | Above |
577910 | Acquisiton | 425.62162 | 8.000000 | 111.0000 | 77.00000 | 0.6936937 | Above |
577910 | Reversal | 393.15730 | 10.000000 | 89.0000 | 62.00000 | 0.6966292 | Above |
628645 | Acquisiton | 592.70330 | 8.000000 | 91.0000 | 68.00000 | 0.7472527 | Above |
628645 | Reversal | 532.21101 | 10.000000 | 109.0000 | 76.00000 | 0.6972477 | Above |
642086 | Acquisiton | 385.56190 | 8.000000 | 105.0000 | 94.00000 | 0.8952381 | Above |
642086 | Reversal | 374.34737 | 10.000000 | 95.0000 | 65.00000 | 0.6842105 | Above |
668126 | Acquisiton | 576.84043 | 9.000000 | 94.0000 | 62.00000 | 0.6595745 | Above |
668126 | Reversal | 529.76415 | 11.000000 | 106.0000 | 69.00000 | 0.6509434 | Above |
673855 | Acquisiton | 595.96875 | 10.000000 | 96.0000 | 61.00000 | 0.6354167 | Above |
673855 | Reversal | 558.74038 | 9.000000 | 104.0000 | 68.00000 | 0.6538462 | Above |
710889 | Acquisiton | 537.50495 | 10.000000 | 101.0000 | 62.00000 | 0.6138614 | Above |
710889 | Reversal | 492.06061 | 9.000000 | 99.0000 | 56.00000 | 0.5656566 | Above |
744735 | Acquisiton | 447.49495 | 9.000000 | 99.0000 | 61.00000 | 0.6161616 | Above |
744735 | Reversal | 407.97030 | 8.000000 | 101.0000 | 73.00000 | 0.7227723 | Above |
768782 | Acquisiton | 405.53398 | 8.000000 | 103.0000 | 78.00000 | 0.7572816 | Above |
768782 | Reversal | 379.30928 | 8.000000 | 97.0000 | 74.00000 | 0.7628866 | Above |
808746 | Acquisiton | 489.69474 | 8.000000 | 95.0000 | 76.00000 | 0.8000000 | Above |
808746 | Reversal | 484.16190 | 9.000000 | 105.0000 | 79.00000 | 0.7523810 | Above |
810684 | Acquisiton | 460.47664 | 9.000000 | 107.0000 | 73.00000 | 0.6822430 | Above |
810684 | Reversal | 454.90323 | 9.000000 | 93.0000 | 71.00000 | 0.7634409 | Above |
811078 | Acquisiton | 228.54206 | 11.000000 | 107.0000 | 52.00000 | 0.4859813 | Below |
811078 | Reversal | 218.12903 | 10.000000 | 93.0000 | 45.00000 | 0.4838710 | Below |
832332 | Acquisiton | 471.65625 | 8.000000 | 96.0000 | 78.00000 | 0.8125000 | Above |
832332 | Reversal | 449.51923 | 8.000000 | 104.0000 | 80.00000 | 0.7692308 | Above |
857452 | Acquisiton | 300.89247 | 8.000000 | 93.0000 | 58.00000 | 0.6236559 | Above |
857452 | Reversal | 293.85047 | 8.000000 | 107.0000 | 58.00000 | 0.5420561 | Above |
858150 | Acquisiton | 332.73469 | 11.000000 | 98.0000 | 59.00000 | 0.6020408 | Above |
858150 | Reversal | 198.69608 | 13.000000 | 102.0000 | 58.00000 | 0.5686275 | Above |
858936 | Acquisiton | 628.51376 | 8.000000 | 109.0000 | 86.00000 | 0.7889908 | Above |
858936 | Reversal | 618.60440 | 10.000000 | 91.0000 | 55.00000 | 0.6043956 | Above |
868336 | Acquisiton | 357.79808 | 13.000000 | 104.0000 | 55.00000 | 0.5288462 | Above |
868336 | Reversal | 286.89583 | 12.000000 | 96.0000 | 49.00000 | 0.5104167 | Above |
897077 | Acquisiton | 435.97849 | 8.000000 | 93.0000 | 77.00000 | 0.8279570 | Above |
897077 | Reversal | 385.54206 | 8.000000 | 107.0000 | 81.00000 | 0.7570093 | Above |
906532 | Acquisiton | 448.70588 | 8.000000 | 85.0000 | 62.00000 | 0.7294118 | Above |
906532 | Reversal | 416.56522 | 8.000000 | 115.0000 | 77.00000 | 0.6695652 | Above |
912677 | Acquisiton | 301.44762 | 12.000000 | 105.0000 | 69.00000 | 0.6571429 | Above |
912677 | Reversal | 237.21053 | 11.000000 | 95.0000 | 48.00000 | 0.5052632 | Above |
919986 | Acquisiton | 420.97980 | 10.000000 | 99.0000 | 64.00000 | 0.6464646 | Above |
919986 | Reversal | 375.57843 | 9.000000 | 102.0000 | 66.00000 | 0.6470588 | Above |
994811 | Acquisiton | 413.98876 | 9.000000 | 89.0000 | 62.00000 | 0.6966292 | Above |
994811 | Reversal | 398.59459 | 9.000000 | 111.0000 | 81.00000 | 0.7297297 | Above |
Means | 429.40005 | 9.411111 | 100.0333 | 66.66667 | 0.6662079 |
## N.B. For the actual analysis, pause here and create a drop_irregular function that prints summaries and has options to drop:
##  - bad subjects (those with below-average accuracy overall)
##  - bad blocks (blocks with < 50% accuracy)
##  - bad trials (latency > 3000 ms; this should also encompass non-responses)
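A first sketch of the planned drop_irregular(), implementing the three exclusions listed above; the function name comes from the note, but its signature and column names are provisional.

```r
library(dplyr)

# Drop below-average subjects, <50%-accuracy blocks, and trials with
# latency > 3000 ms (which also catches non-responses).
drop_irregular <- function(trials) {
  subj_acc <- trials %>%
    group_by(subject) %>%
    summarise(acc = mean(ResponseCorrect), .groups = "drop")
  bad_subjects <- subj_acc$subject[subj_acc$acc < mean(subj_acc$acc)]
  trials %>%
    filter(!subject %in% bad_subjects) %>%     # bad subjects
    group_by(subject, block_number) %>%
    filter(mean(ResponseCorrect) >= 0.5) %>%   # bad blocks
    ungroup() %>%
    filter(rt <= 3000)                         # bad trials / no-response
}
```

A version for production use should probably print a summary of what was dropped at each step, as the note suggests.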
Figure 1. This chunk prints an overall view of reaction time.
rts <- list()
for (i in subjects) {
  s <- reversal %>% dplyr::filter(subject == i)
  # index by name: subscripting with the raw numeric ID would create a huge sparse list
  rts[[as.character(i)]] <- ggplot(s, aes(rt)) + geom_histogram() + ggtitle(i)
}
rts[["overall"]] <- ggplot(reversal, aes(rt)) + geom_histogram() + ggtitle("Overall RTs")
summary(reversal$rt)
## Length Class Mode
## 9003 character character
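The summary() above reports rt as character, a side effect of running gsub() over every column of the frame during cleaning. A minimal conversion sketch (toy vector standing in for `reversal$rt`):

```r
# Convert character RTs to numeric before any summaries or models
rt_chr <- c("421.93", "400.37", "543.29")  # stand-in for reversal$rt
rt_num <- as.numeric(rt_chr)
summary(rt_num)
```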
plot_subjects <- list()  # initialize collectors (missing in the earlier draft)
cowplot_list <- list()
for (i in subjects) {
  d <- reversal %>% dplyr::filter(subject == i)
  scatter <- ggplot(d, aes(x = trial_number, y = rightleftcorrect, color = as.factor(ResponseCorrect))) +
    geom_point() +
    geom_vline(aes(xintercept = reversal_trial, color = "Reversal Point")) +
    scale_color_manual(labels = c("Incorrect", "Correct", "Reversal Point"),
                       values = wes_palette("Cavalcanti1")[c(5, 4, 2)]) +
    xlab("Trial Number") + ylab("Response Type") + ggtitle(i) +
    facet_wrap(~block_number)
  # obj is the by-phase/by-block performance summary built for Table 1
  if (summary == "by_block_phase") {
    h <- obj %>% filter(subject == i)
    hist <- ggplot(h, aes(y = percent_correct, x = task_phase, fill = above_threshold)) +
      geom_bar(stat = "identity") +
      scale_fill_manual(values = wes_palette("Cavalcanti1")[c(4, 5)]) +
      ylab("Percentage Correct") + xlab("") + facet_grid(~block_number)
  } else if (summary == "by_block") {
    h <- obj %>% filter(subject == i)
    hist <- ggplot(h, aes(y = percent_correct, x = block_number, fill = above_threshold)) +
      geom_bar(stat = "identity") +
      scale_fill_manual(values = wes_palette("Cavalcanti1")[c(4, 5)]) +
      ylab("Percentage Correct") + xlab("")
  } else if (summary == "by_phase") {
    h <- obj %>% filter(subject == i)
    hist <- ggplot(h, aes(y = percent_correct, x = task_phase, fill = above_threshold)) +
      geom_bar(stat = "identity") +
      scale_fill_manual(values = wes_palette("Cavalcanti1")[c(4, 5)]) +
      ylab("Percentage Correct") + xlab("")
  }
  # index by name so list positions stay compact
  plot_subjects[[as.character(i)]] <- list(scatter, hist)
  cowplot_list[[as.character(i)]] <- cowplot::plot_grid(scatter, hist, nrow = 2)
}
## Warning: Removed 195 rows containing missing values (geom_vline).
## (warning repeated once per subject, 45 times in total; 195-196 rows each time)
template <- c(
"### Subject {{y}}\n",
"```{r, echo = FALSE}\n",
"cowplot_list[[{{y}}]] \n",
"```\n",
"\n"
)
plots <- lapply(1:length(cowplot_list), function(y) {knitr::knit_expand(text = template)})
a_1 <- aov(rt ~ task_phase, data = reversal)
summary(a_1)
## Df Sum Sq Mean Sq F value Pr(>F)
## task_phase 1 3266890 3266890 66.69 3.61e-16 ***
## Residuals 9001 440935107 48987
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
a_1 <- aov(rt ~ block_number, data = reversal)
summary(a_1)
## Df Sum Sq Mean Sq F value Pr(>F)
## block_number 4 11994702 2998675 62.43 <2e-16 ***
## Residuals 8998 432207296 48034
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
rt_trialnum <- lm(rt ~ total_trialnum + task_phase + block_number, data = reversal)
summary(rt_trialnum)
##
## Call:
## lm(formula = rt ~ total_trialnum + task_phase + block_number,
## data = reversal)
##
## Residuals:
## Min 1Q Median 3Q Max
## -669.60 -95.56 -12.38 79.81 1564.29
##
## Coefficients: (4 not defined because of singularities)
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 908.604 29.849 30.440 < 2e-16 ***
## total_trialnum10 -414.849 42.911 -9.668 < 2e-16 ***
## total_trialnum100 -531.203 43.262 -12.279 < 2e-16 ***
## total_trialnum101 -500.885 43.291 -11.570 < 2e-16 ***
## total_trialnum102 -535.967 43.416 -12.345 < 2e-16 ***
## total_trialnum103 -538.946 43.521 -12.384 < 2e-16 ***
## total_trialnum104 -513.739 43.676 -11.762 < 2e-16 ***
## total_trialnum105 -509.163 43.942 -11.587 < 2e-16 ***
## total_trialnum106 -538.378 43.989 -12.239 < 2e-16 ***
## total_trialnum107 -519.482 44.038 -11.796 < 2e-16 ***
## total_trialnum108 -564.727 44.038 -12.824 < 2e-16 ***
## total_trialnum109 -551.349 44.038 -12.520 < 2e-16 ***
## total_trialnum11 -400.004 42.911 -9.322 < 2e-16 ***
## total_trialnum110 -558.327 44.038 -12.678 < 2e-16 ***
## total_trialnum111 -516.238 44.038 -11.722 < 2e-16 ***
## total_trialnum112 -541.349 44.038 -12.293 < 2e-16 ***
## total_trialnum113 -530.771 44.038 -12.052 < 2e-16 ***
## total_trialnum114 -531.704 44.038 -12.074 < 2e-16 ***
## total_trialnum115 -556.393 44.038 -12.634 < 2e-16 ***
## total_trialnum116 -562.727 44.038 -12.778 < 2e-16 ***
## total_trialnum117 -492.349 44.038 -11.180 < 2e-16 ***
## total_trialnum118 -549.571 44.038 -12.479 < 2e-16 ***
## total_trialnum119 -554.193 44.038 -12.584 < 2e-16 ***
## total_trialnum12 -408.760 42.911 -9.526 < 2e-16 ***
## total_trialnum120 -524.349 44.038 -11.907 < 2e-16 ***
## total_trialnum121 -104.204 42.911 -2.428 0.01519 *
## total_trialnum122 -494.871 42.911 -11.532 < 2e-16 ***
## total_trialnum123 -516.382 42.911 -12.034 < 2e-16 ***
## total_trialnum124 -516.493 42.911 -12.036 < 2e-16 ***
## total_trialnum125 -459.649 42.911 -10.712 < 2e-16 ***
## total_trialnum126 -518.404 42.911 -12.081 < 2e-16 ***
## total_trialnum127 -527.137 42.911 -12.284 < 2e-16 ***
## total_trialnum128 -545.315 42.911 -12.708 < 2e-16 ***
## total_trialnum129 -516.426 42.911 -12.035 < 2e-16 ***
## total_trialnum13 -408.115 42.911 -9.511 < 2e-16 ***
## total_trialnum130 -516.560 42.911 -12.038 < 2e-16 ***
## total_trialnum131 -503.626 42.911 -11.736 < 2e-16 ***
## total_trialnum132 -510.937 42.911 -11.907 < 2e-16 ***
## total_trialnum133 -520.049 42.911 -12.119 < 2e-16 ***
## total_trialnum134 -515.160 42.911 -12.005 < 2e-16 ***
## total_trialnum135 -527.071 42.911 -12.283 < 2e-16 ***
## total_trialnum136 -536.390 42.914 -12.499 < 2e-16 ***
## total_trialnum137 -510.050 42.932 -11.881 < 2e-16 ***
## total_trialnum138 -526.467 43.007 -12.242 < 2e-16 ***
## total_trialnum139 -522.298 43.094 -12.120 < 2e-16 ***
## total_trialnum14 -402.671 42.911 -9.384 < 2e-16 ***
## total_trialnum140 -555.025 43.262 -12.829 < 2e-16 ***
## total_trialnum141 -526.271 43.351 -12.140 < 2e-16 ***
## total_trialnum142 -513.131 43.485 -11.800 < 2e-16 ***
## total_trialnum143 -535.073 43.676 -12.251 < 2e-16 ***
## total_trialnum144 -546.081 43.761 -12.479 < 2e-16 ***
## total_trialnum145 -505.689 43.849 -11.533 < 2e-16 ***
## total_trialnum146 -505.719 43.942 -11.509 < 2e-16 ***
## total_trialnum147 -553.193 44.038 -12.562 < 2e-16 ***
## total_trialnum148 -563.060 44.038 -12.786 < 2e-16 ***
## total_trialnum149 -543.727 44.038 -12.347 < 2e-16 ***
## total_trialnum15 -413.634 42.914 -9.639 < 2e-16 ***
## total_trialnum150 -582.704 44.038 -13.232 < 2e-16 ***
## total_trialnum151 -557.016 44.038 -12.648 < 2e-16 ***
## total_trialnum152 -532.104 44.038 -12.083 < 2e-16 ***
## total_trialnum153 -547.127 44.038 -12.424 < 2e-16 ***
## total_trialnum154 -555.460 44.038 -12.613 < 2e-16 ***
## total_trialnum155 -556.482 44.038 -12.636 < 2e-16 ***
## total_trialnum156 -563.171 44.038 -12.788 < 2e-16 ***
## total_trialnum157 -512.282 44.038 -11.633 < 2e-16 ***
## total_trialnum158 -489.104 44.038 -11.106 < 2e-16 ***
## total_trialnum159 -561.393 44.038 -12.748 < 2e-16 ***
## total_trialnum16 -467.206 42.932 -10.883 < 2e-16 ***
## total_trialnum160 -555.549 44.038 -12.615 < 2e-16 ***
## total_trialnum161 -131.871 42.911 -3.073 0.00212 **
## total_trialnum162 -522.671 42.911 -12.180 < 2e-16 ***
## total_trialnum163 -451.693 42.911 -10.526 < 2e-16 ***
## total_trialnum164 -498.515 42.911 -11.617 < 2e-16 ***
## total_trialnum165 -468.849 42.911 -10.926 < 2e-16 ***
## total_trialnum166 -508.226 42.911 -11.844 < 2e-16 ***
## total_trialnum167 -551.693 42.911 -12.857 < 2e-16 ***
## total_trialnum168 -509.093 42.911 -11.864 < 2e-16 ***
## total_trialnum169 -514.626 42.911 -11.993 < 2e-16 ***
## total_trialnum17 -436.185 42.992 -10.146 < 2e-16 ***
## total_trialnum170 -547.626 42.911 -12.762 < 2e-16 ***
## total_trialnum171 -469.560 42.911 -10.943 < 2e-16 ***
## total_trialnum172 -495.760 42.911 -11.553 < 2e-16 ***
## total_trialnum173 -544.182 42.911 -12.682 < 2e-16 ***
## total_trialnum174 -494.449 42.911 -11.523 < 2e-16 ***
## total_trialnum175 -512.975 42.912 -11.954 < 2e-16 ***
## total_trialnum176 -551.160 42.916 -12.843 < 2e-16 ***
## total_trialnum177 -488.813 42.925 -11.387 < 2e-16 ***
## total_trialnum178 -505.895 42.957 -11.777 < 2e-16 ***
## total_trialnum179 -521.741 42.992 -12.136 < 2e-16 ***
## total_trialnum18 -406.993 43.022 -9.460 < 2e-16 ***
## total_trialnum180 -520.890 43.055 -12.098 < 2e-16 ***
## total_trialnum181 -529.529 43.209 -12.255 < 2e-16 ***
## total_trialnum182 -558.885 43.291 -12.910 < 2e-16 ***
## total_trialnum183 -551.634 43.416 -12.706 < 2e-16 ***
## total_trialnum184 -533.605 43.558 -12.250 < 2e-16 ***
## total_trialnum185 -526.340 43.804 -12.016 < 2e-16 ***
## total_trialnum186 -529.393 44.038 -12.021 < 2e-16 ***
## total_trialnum187 -484.460 44.038 -11.001 < 2e-16 ***
## total_trialnum188 -531.349 44.038 -12.066 < 2e-16 ***
## total_trialnum189 -526.016 44.038 -11.944 < 2e-16 ***
## total_trialnum19 -396.260 43.074 -9.200 < 2e-16 ***
## total_trialnum190 -571.460 44.038 -12.976 < 2e-16 ***
## total_trialnum191 -531.504 44.038 -12.069 < 2e-16 ***
## total_trialnum192 -490.571 44.038 -11.140 < 2e-16 ***
## total_trialnum193 -529.793 44.038 -12.030 < 2e-16 ***
## total_trialnum194 -564.527 44.038 -12.819 < 2e-16 ***
## total_trialnum195 -526.571 44.038 -11.957 < 2e-16 ***
## total_trialnum196 -538.749 44.038 -12.234 < 2e-16 ***
## total_trialnum197 -563.616 44.038 -12.798 < 2e-16 ***
## total_trialnum198 -530.416 44.038 -12.044 < 2e-16 ***
## total_trialnum199 -556.371 44.038 -12.634 < 2e-16 ***
## total_trialnum2 -254.849 42.911 -5.939 2.98e-09 ***
## total_trialnum20 -431.447 43.183 -9.991 < 2e-16 ***
## total_trialnum200 -531.727 44.038 -12.074 < 2e-16 ***
## total_trialnum21 -426.721 43.235 -9.870 < 2e-16 ***
## total_trialnum22 -456.093 43.351 -10.521 < 2e-16 ***
## total_trialnum23 -463.324 43.521 -10.646 < 2e-16 ***
## total_trialnum24 -475.932 43.597 -10.917 < 2e-16 ***
## total_trialnum25 -438.637 43.895 -9.993 < 2e-16 ***
## total_trialnum26 -423.097 43.942 -9.629 < 2e-16 ***
## total_trialnum27 -433.704 44.038 -9.848 < 2e-16 ***
## total_trialnum28 -426.571 44.038 -9.686 < 2e-16 ***
## total_trialnum29 -464.127 44.038 -10.539 < 2e-16 ***
## total_trialnum3 -352.404 42.911 -8.212 2.48e-16 ***
## total_trialnum30 -507.349 44.038 -11.521 < 2e-16 ***
## total_trialnum31 -465.238 44.038 -10.564 < 2e-16 ***
## total_trialnum32 -482.571 44.038 -10.958 < 2e-16 ***
## total_trialnum33 -458.904 44.038 -10.421 < 2e-16 ***
## total_trialnum34 -476.549 44.038 -10.821 < 2e-16 ***
## total_trialnum35 -474.704 44.038 -10.779 < 2e-16 ***
## total_trialnum36 -501.727 44.038 -11.393 < 2e-16 ***
## total_trialnum37 -444.327 44.038 -10.090 < 2e-16 ***
## total_trialnum38 -462.371 44.038 -10.499 < 2e-16 ***
## total_trialnum39 -491.016 44.038 -11.150 < 2e-16 ***
## total_trialnum4 -363.515 42.911 -8.471 < 2e-16 ***
## total_trialnum40 -528.660 44.038 -12.005 < 2e-16 ***
## total_trialnum41 93.663 42.911 2.183 0.02908 *
## total_trialnum42 -446.693 42.911 -10.410 < 2e-16 ***
## total_trialnum43 -418.471 42.911 -9.752 < 2e-16 ***
## total_trialnum44 -457.671 42.911 -10.665 < 2e-16 ***
## total_trialnum45 -496.671 42.911 -11.574 < 2e-16 ***
## total_trialnum46 -510.360 42.911 -11.893 < 2e-16 ***
## total_trialnum47 -490.493 42.911 -11.430 < 2e-16 ***
## total_trialnum48 -481.137 42.911 -11.212 < 2e-16 ***
## total_trialnum49 -455.204 42.911 -10.608 < 2e-16 ***
## total_trialnum5 -402.182 42.911 -9.372 < 2e-16 ***
## total_trialnum50 -483.537 42.911 -11.268 < 2e-16 ***
## total_trialnum51 -460.137 42.911 -10.723 < 2e-16 ***
## total_trialnum52 -492.271 42.911 -11.472 < 2e-16 ***
## total_trialnum53 -473.404 42.911 -11.032 < 2e-16 ***
## total_trialnum54 -497.871 42.911 -11.602 < 2e-16 ***
## total_trialnum55 -505.708 42.912 -11.785 < 2e-16 ***
## total_trialnum56 -524.191 42.947 -12.205 < 2e-16 ***
## total_trialnum57 -520.845 43.007 -12.111 < 2e-16 ***
## total_trialnum58 -502.379 43.055 -11.668 < 2e-16 ***
## total_trialnum59 -463.787 43.094 -10.762 < 2e-16 ***
## total_trialnum6 -363.182 42.911 -8.464 < 2e-16 ***
## total_trialnum60 -527.032 43.159 -12.211 < 2e-16 ***
## total_trialnum61 -513.099 43.235 -11.868 < 2e-16 ***
## total_trialnum62 -519.078 43.320 -11.982 < 2e-16 ***
## total_trialnum63 -523.938 43.450 -12.058 < 2e-16 ***
## total_trialnum64 -487.909 43.597 -11.191 < 2e-16 ***
## total_trialnum65 -494.681 43.761 -11.304 < 2e-16 ***
## total_trialnum66 -512.334 43.989 -11.647 < 2e-16 ***
## total_trialnum67 -508.904 44.038 -11.556 < 2e-16 ***
## total_trialnum68 -491.571 44.038 -11.162 < 2e-16 ***
## total_trialnum69 -527.282 44.038 -11.973 < 2e-16 ***
## total_trialnum7 -383.738 42.911 -8.943 < 2e-16 ***
## total_trialnum70 -494.349 44.038 -11.225 < 2e-16 ***
## total_trialnum71 -502.927 44.038 -11.420 < 2e-16 ***
## total_trialnum72 -541.171 44.038 -12.289 < 2e-16 ***
## total_trialnum73 -528.616 44.038 -12.004 < 2e-16 ***
## total_trialnum74 -485.304 44.038 -11.020 < 2e-16 ***
## total_trialnum75 -484.727 44.038 -11.007 < 2e-16 ***
## total_trialnum76 -525.527 44.038 -11.933 < 2e-16 ***
## total_trialnum77 -515.193 44.038 -11.699 < 2e-16 ***
## total_trialnum78 -486.260 44.038 -11.042 < 2e-16 ***
## total_trialnum79 -520.393 44.038 -11.817 < 2e-16 ***
## total_trialnum8 -410.960 42.911 -9.577 < 2e-16 ***
## total_trialnum80 -516.127 44.038 -11.720 < 2e-16 ***
## total_trialnum81 -1.182 42.911 -0.028 0.97803
## total_trialnum82 -464.093 42.911 -10.815 < 2e-16 ***
## total_trialnum83 -509.138 42.911 -11.865 < 2e-16 ***
## total_trialnum84 -514.026 42.911 -11.979 < 2e-16 ***
## total_trialnum85 -530.360 42.911 -12.359 < 2e-16 ***
## total_trialnum86 -498.538 42.911 -11.618 < 2e-16 ***
## total_trialnum87 -499.782 42.911 -11.647 < 2e-16 ***
## total_trialnum88 -522.804 42.911 -12.183 < 2e-16 ***
## total_trialnum89 -490.915 42.911 -11.440 < 2e-16 ***
## total_trialnum9 -400.915 42.911 -9.343 < 2e-16 ***
## total_trialnum90 -510.093 42.911 -11.887 < 2e-16 ***
## total_trialnum91 -492.182 42.911 -11.470 < 2e-16 ***
## total_trialnum92 -529.760 42.911 -12.345 < 2e-16 ***
## total_trialnum93 -510.271 42.911 -11.891 < 2e-16 ***
## total_trialnum94 -471.226 42.911 -10.981 < 2e-16 ***
## total_trialnum95 -507.702 42.947 -11.821 < 2e-16 ***
## total_trialnum96 -475.666 42.968 -11.070 < 2e-16 ***
## total_trialnum97 -505.926 43.022 -11.760 < 2e-16 ***
## total_trialnum98 -547.231 43.094 -12.699 < 2e-16 ***
## total_trialnum99 -529.610 43.235 -12.250 < 2e-16 ***
## task_phaseReversal 17.678 9.899 1.786 0.07416 .
## block_number2 NA NA NA NA
## block_number3 NA NA NA NA
## block_number4 NA NA NA NA
## block_number5 NA NA NA NA
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 206.8 on 8802 degrees of freedom
## Multiple R-squared: 0.1525, Adjusted R-squared: 0.1333
## F-statistic: 7.922 on 200 and 8802 DF, p-value: < 2.2e-16
emmeans(rt_trialnum, "task_phase")
## task_phase emmean SE df lower.CL upper.CL
## Acquisiton 420 5.45 8802 409 431
## Reversal 438 5.36 8802 427 448
##
## Results are averaged over the levels of: total_trialnum, block_number
## Confidence level used: 0.95
emmeans(rt_trialnum, "block_number")
## block_number emmean SE df lower.CL upper.CL
## 1 497 4.87 8802 487 507
## 2 435 4.87 8802 426 445
## 3 409 4.88 8802 400 419
## 4 399 4.88 8802 389 408
## 5 405 4.88 8802 395 414
##
## Results are averaged over the levels of: total_trialnum, task_phase
## Confidence level used: 0.95
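The lm() above coded total_trialnum as a factor (hence the ~200 dummy coefficients), because the column is stored as character. If a linear trend over trials is the intent, converting to numeric gives a single slope. A toy illustration (the real refit would use `reversal`):

```r
set.seed(2)
# Character trial numbers reproduce the factor-coding problem
d <- data.frame(trialnum_chr = as.character(1:100),
                rt = 500 - 1.5 * (1:100) + rnorm(100, sd = 20))
fit_factor  <- lm(rt ~ trialnum_chr, data = d)              # one dummy per trial
fit_numeric <- lm(rt ~ as.numeric(trialnum_chr), data = d)  # one slope
c(factor = length(coef(fit_factor)), numeric = length(coef(fit_numeric)))
# factor: 100 coefficients; numeric: 2
```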
On average, the 8 participants made the “correct” choice approximately 65% of the time and made at least 10 correct choices by trial 10. Both of these outcomes suggest participants are learning the task well in 5 blocks of 40 trials.
SGP - it would make sense to add a function that returns particularly bad blocks in terms of RT.
Reaction times averaged approximately 480 ms, which seems reasonable. There were also 5 trials with 0 latency, but it seems this will inevitably be a feature of the task…
Issues for 9/23 appear addressed. Data collection is slated for the week of 10/11. Review complete.
On average, the 11 participants made the “correct” choice approximately 85% of the time and made at least 10 correct choices by trial 12. Both of these outcomes suggest the block length is working out well with 5 blocks of 40 trials.
SGP added an option at the top of the script to examine data by block, phase, or both, which should make it easy to adjust views of the data. It would also make sense to add a function that returns particularly bad blocks in terms of RT.
Reaction times averaged approximately 200 ms, which is incredibly short; we should spend some time discussing the implications of this. There were also trials with 0 latency, which seems physically impossible.
A few critical concerns came up immediately. First, some subject IDs overlap. This was updated in the Qualtrics survey, and luckily the data were still distinguishable by time stamp. Second, one of those repeated subjects did not complete the task fully (they bailed in the middle of the cannon task). It’s unclear why, but this means I cannot pay this participant; they also completed only 40% of the surveys, indicating to me that they were not going to receive payment anyway. Third, it seems the workerIDs were not recorded in the Qualtrics survey. This has now been updated and will be tested this evening.
Overall, these graphs corroborate the idea that participants may be learning incredibly quickly. I worry that this means participants will reach a ceiling very fast, and we won’t see effects of block or learning over time. Adding an explicit instruction for participants to respond as quickly and accurately as possible did speed up the task considerably. Participants’ reaction times were fast - alarmingly fast: the average was 200 ms, and some participants had trials with reaction times of 0. This seems problematic for analysis.
This next block creates all variables that cannot be recorded directly from Inquisit for whatever reason (not recorded as numeric, wrong block number, etc.). I worked to create as many variables as possible within the task structure, but a few still have to be transformed offline.
1. Confirm that the final data generate values that make sense.
2. Generate a vector of bad RTs and provide info on possible transforms.
3. Ensure that Qualtrics is not re-using IDs (lengthen the randomID string).
4. Add workerIDs to the embedded data so data can be matched across participants more easily.
On average, participants made the “correct” choice approximately 80% of the time and made at least 10 correct choices by trial 17. Both of these outcomes suggest the block length can be shortened, as suggested by MNH, to as little as 50 trials. Regarding data verification and visualization, it will be important to look at RT data more explicitly in the larger sample. Based on the pilot, it looks like the RTs could be handled pretty easily with a log or inverse transform.
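A quick sketch of the transform check mentioned above: compare the skew of raw, log, and inverse RTs. Zero-latency trials must be dropped first, since log(0) and 1/0 are undefined. The RT distribution below is a hypothetical stand-in, and `skew` is a simple hand-rolled moment estimator rather than a package function.

```r
set.seed(3)
# Simple sample skewness (third standardized moment)
skew <- function(x) mean((x - mean(x))^3) / sd(x)^3

rt <- rexp(500, rate = 1 / 300) + 150   # toy right-skewed RTs, in ms
c(raw = skew(rt), log = skew(log(rt)), inverse = skew(-1000 / rt))
```

Whichever transform brings the skew closest to zero is a reasonable starting point for the larger sample.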
Task Recommendations - For Michael Discussion
1. Shorten block length to 50 trials per MNH’s suggestion. Given that subjects learned the contingencies in an average of 16 trials, this seems about right. For time’s sake, I’m thinking of erring on the side of 4 blocks of 50 trials.
2. Explicitly instruct participants to click as quickly and accurately as possible, and shorten the response windows so the entire task takes between 8 and 10 minutes. See “Task Timing” for more details: https://docs.google.com/spreadsheets/d/1V2UD28C_zAfH90BnNO3si6c0-qDUDFbHoU44rfdEt4I/edit#gid=390744519
3. Combine the stimulus selection and feedback phases to save time?
4. Collect 10-15 subjects and check the final timing.
Data Check Plans
1. Translate dplyr operations into reusable functions and duplicate as little as possible across tasks.
2. Create the drop_irregular function.
3. Check RTs and provide info on possible transforms.